
    Video Frame Interpolation for High Dynamic Range Sequences Captured with Dual-exposure Sensors

    Video frame interpolation (VFI) enables many important applications that might involve the temporal domain, such as slow motion playback, or the spatial domain, such as stop motion sequences. We focus on the former task, where one of the key challenges is handling high dynamic range (HDR) scenes in the presence of complex motion. To this end, we explore possible advantages of dual-exposure sensors that readily provide sharp short and blurry long exposures that are spatially registered and whose ends are temporally aligned. This way, motion blur registers temporally continuous information on the scene motion that, combined with the sharp reference, enables more precise motion sampling within a single camera shot. We demonstrate that this facilitates a more complex motion reconstruction in the VFI task, as well as HDR frame reconstruction that so far has been considered only for the originally captured frames, not in-between interpolated frames. We design a neural network trained in these tasks that clearly outperforms existing solutions. We also propose a metric for scene motion complexity that provides important insights into the performance of VFI methods at test time.
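    The dual-exposure idea above can be illustrated with a toy 1-D model (names, sizes, and the box-integration model are illustrative assumptions, not the paper's actual sensor pipeline): a long exposure integrates the scene over many sub-frames, so its motion blur traces the full trajectory, while a short exposure that ends at the same instant pins down where the motion finished.

    ```python
    import numpy as np

    # Hypothetical 1-D "scene": a bright dot moving across 8 sub-frames.
    T, W = 8, 32
    frames = np.zeros((T, W))
    for t in range(T):
        frames[t, 4 + 3 * t] = 1.0  # dot moves 3 px per sub-frame

    # Illustrative dual-exposure readout: the long exposure integrates all T
    # sub-frames (motion blur encodes the trajectory), while the short exposure
    # samples only the final sub-frame (sharp). Both exposures end together,
    # i.e. their ends are temporally aligned, as in the abstract.
    long_exposure = frames.mean(axis=0)   # blurry: energy spread along the path
    short_exposure = frames[-1]           # sharp: a single instant

    # The blur streak covers every position the dot visited ...
    blur_support = np.flatnonzero(long_exposure > 0)
    # ... while the sharp frame pins down where the motion ended.
    sharp_support = np.flatnonzero(short_exposure > 0)

    print(blur_support)   # the whole trajectory: 4, 7, ..., 25
    print(sharp_support)  # the endpoint only: 25
    ```

    Combining the trajectory from the blur with the sharp anchor is what lets a single shot constrain motion for interpolation, rather than requiring two separately captured frames.
    
    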

    HDR Denoising and Deblurring by Learning Spatio-temporal Distortion Model

    We seek to reconstruct sharp and noise-free high-dynamic range (HDR) video from a dual-exposure sensor that records different low-dynamic range (LDR) information in different pixel columns: odd columns provide low-exposure, sharp, but noisy information; even columns complement this with less noisy, high-exposure, but motion-blurred data. Previous LDR work learns to deblur and denoise (DISTORTED->CLEAN), supervised by pairs of CLEAN and DISTORTED images. Regrettably, capturing DISTORTED sensor readings is time-consuming; moreover, there is a lack of CLEAN HDR videos. We suggest a method to overcome those two limitations. First, we learn a different function instead: CLEAN->DISTORTED, which generates samples containing correlated pixel noise, as well as row and column noise and motion blur, from a low number of CLEAN sensor readings. Second, as there is not enough CLEAN HDR video available, we devise a method to learn from LDR video instead. Our approach compares favorably to several strong baselines, and can boost existing methods when they are re-trained on our data. Combined with spatial and temporal super-resolution, it enables applications such as re-lighting with low noise or blur.
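    A minimal sketch of such a CLEAN->DISTORTED forward model, under stated assumptions: the function name, the box-blur kernel, and all noise levels below are hypothetical stand-ins chosen for illustration, not the paper's learned distortion model. It only mirrors the ingredients the abstract names: column-alternating exposures, per-pixel noise, correlated row/column noise, and motion blur.

    ```python
    import numpy as np

    def clean_to_distorted(clean, rng):
        """Illustrative CLEAN->DISTORTED model: alternate pixel columns get
        either a blurred-but-clean (long exposure) or a sharp-but-noisy
        (short exposure) rendition, plus correlated row/column noise."""
        h, w = clean.shape
        k = np.ones(5) / 5.0  # horizontal box blur as a stand-in for motion blur
        blurred = np.stack([np.convolve(row, k, mode="same") for row in clean])
        distorted = clean.copy()
        distorted[:, 0::2] = blurred[:, 0::2]                # long-exposure cols
        distorted[:, 0::2] += rng.normal(0, 0.01, (h, (w + 1) // 2))  # mild noise
        distorted[:, 1::2] += rng.normal(0, 0.05, (h, w // 2))        # heavy noise
        distorted += rng.normal(0, 0.01, (h, 1))             # correlated row noise
        distorted += rng.normal(0, 0.01, (1, w))             # correlated col noise
        return distorted

    rng = np.random.default_rng(1)
    clean = np.linspace(0.0, 1.0, 16 * 16).reshape(16, 16)
    distorted = clean_to_distorted(clean, rng)
    print(distorted.shape)  # (16, 16)
    ```

    The appeal of learning this direction is practical: one cheap CLEAN frame can be turned into unlimited DISTORTED training pairs, sidestepping the slow capture of real distorted sensor readings.
    
    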

    Deep Joint Deinterlacing and Denoising for Single Shot Dual-ISO HDR Reconstruction

    No full text